12 research outputs found

    Building an Aerial-Ground Robotics System for Precision Farming: An Adaptable Solution

    The application of autonomous robots in agriculture is gaining increasing popularity thanks to the high impact it may have on food security, sustainability, resource use efficiency, reduction of chemical treatments, and the optimization of human effort and yield. With this vision, the Flourish research project aimed to develop an adaptable robotic solution for precision farming that combines the aerial survey capabilities of small autonomous unmanned aerial vehicles (UAVs) with targeted intervention performed by multi-purpose unmanned ground vehicles (UGVs). This paper presents an overview of the scientific and technological advances and outcomes obtained in the project. We introduce multi-spectral perception algorithms and aerial and ground-based systems developed for monitoring crop density, weed pressure, and crop nitrogen nutrition status, and for accurately classifying and locating weeds. We then introduce the navigation and mapping systems tailored to our robots in the agricultural environment, as well as the modules for collaborative mapping. We finally present the ground intervention hardware, software solutions, and interfaces we implemented and tested in different field conditions and with different crops. We describe a real use case in which a UAV collaborates with a UGV to monitor the field and to perform selective spraying without human intervention.
    Comment: Published in IEEE Robotics & Automation Magazine, vol. 28, no. 3, pp. 29-49, Sept. 2021
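    The UAV-to-UGV handoff in the selective-spraying use case can be sketched very loosely as a map handoff. The grid representation, threshold, and function name below are illustrative assumptions, not the Flourish project's actual interfaces.

```python
def cells_to_spray(weed_map, threshold=0.5):
    """Return grid cells whose estimated weed pressure exceeds a threshold.

    `weed_map` is a row-major grid of per-cell weed-pressure estimates in
    [0, 1], as a UAV survey might produce after classification; the UGV
    would then visit the returned cells for targeted intervention.
    """
    return [(i, j)
            for i, row in enumerate(weed_map)
            for j, p in enumerate(row)
            if p > threshold]

# A 2x2 toy map: the UGV would visit cells (0, 1) and (1, 0).
plan = cells_to_spray([[0.1, 0.8],
                       [0.6, 0.2]])
```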

    L2-norm multiple kernel learning and its application to biomedical data fusion

    Background: This paper introduces the notion of optimizing different norms in the dual problem of support vector machines with multiple kernels. The selection of norms yields different extensions of multiple kernel learning (MKL) such as L∞, L1, and L2 MKL. In particular, L2 MKL is a novel method that leads to non-sparse optimal kernel coefficients, in contrast to the sparse kernel coefficients optimized by the existing L∞ MKL method. In real biomedical applications, L2 MKL may have advantages over sparse integration methods for thoroughly combining complementary information in heterogeneous data sources.
    Results: We provide a theoretical analysis of the relationship between the L2 optimization of kernels in the dual problem and the L2 coefficient regularization in the primal problem. Understanding the dual L2 problem grants a unified view on MKL and enables us to extend the L2 method to a wide range of machine learning problems. We implement L2 MKL for ranking and classification problems and compare its performance with the sparse L∞ and the averaging L1 MKL methods. The experiments are carried out on six real biomedical data sets and two large-scale UCI data sets. L2 MKL yields better performance on most of the benchmark data sets. In particular, we propose a novel L2 MKL least squares support vector machine (LSSVM) algorithm, which is shown to be an efficient and promising classifier for processing large-scale data sets.
    Conclusions: This paper extends the statistical framework of genomic data fusion based on MKL. Allowing non-sparse weights on the data sources is an attractive option in settings where we believe most data sources to be relevant to the problem at hand and want to avoid the "winner-takes-all" effect seen in L∞ MKL, which can be detrimental to performance in prospective studies. The notion of optimizing L2 kernels can be straightforwardly extended to ranking, classification, regression, and clustering algorithms. To tackle the computational burden of MKL, this paper proposes several novel LSSVM-based MKL algorithms. Systematic comparison on real data sets shows that LSSVM MKL has performance comparable to the conventional SVM MKL algorithms. Moreover, large-scale numerical experiments indicate that, when cast as semi-infinite programming, LSSVM MKL can be solved more efficiently than SVM MKL.
    Availability: The MATLAB code of the algorithms implemented in this paper can be downloaded from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/l2lssvm.html.
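    A minimal sketch of the kernel-combination step, assuming precomputed Gram matrices per data source: non-sparse (L2-style) weights keep every source in the combination, whereas sparse selection may zero sources out. The function and the fixed weights below are illustrative; in the paper the weights are learned from the dual optimization problem.

```python
import numpy as np

def combine_kernels(gram_matrices, weights):
    """Weighted sum of per-source kernel Gram matrices.

    With non-sparse weights every data source contributes, avoiding the
    "winner-takes-all" effect of sparse kernel selection.
    """
    w = np.asarray(weights, dtype=float)
    if np.any(w < 0):
        raise ValueError("kernel weights must be non-negative")
    w = w / np.linalg.norm(w)  # project onto the unit L2 sphere
    return sum(wi * K for wi, K in zip(w, gram_matrices))

# Two toy "data sources" yielding linear-kernel Gram matrices
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T
K = combine_kernels([K1, K2], [1.0, 1.0])  # equal non-sparse weights
```

    The combined matrix stays symmetric and can be passed to any kernel machine that accepts a precomputed kernel.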

    Preference for biological motion is reduced in ASD: implications for clinical trials and the search for biomarkers

    Background: The neurocognitive mechanisms underlying autism spectrum disorder (ASD) remain unclear. Progress has been largely hampered by small sample sizes, variable age ranges and resulting inconsistent findings. There is a pressing need for large definitive studies to delineate the nature and extent of key case/control differences to direct research towards fruitful areas for future investigation. Here we focus on perception of biological motion, a promising index of social brain function which may be altered in ASD. In a large sample ranging from childhood to adulthood, we assess whether biological motion preference differs in ASD compared to neurotypical participants (NT), how differences are modulated by age and sex and whether they are associated with dimensional variation in concurrent or later symptomatology. Methods: Eye-tracking data were collected from 486 6-to-30-year-old autistic (N = 282) and non-autistic control (N = 204) participants whilst they viewed 28 trials pairing biological (BM) and control (non-biological, CTRL) motion. Preference for the biological motion stimulus was calculated as (1) proportion looking time difference (BM-CTRL) and (2) peak look duration difference (BM-CTRL). Results: The ASD group showed a present but weaker preference for biological motion than the NT group. The nature of the control stimulus modulated preference for biological motion in both groups. Biological motion preference did not vary with age, gender, or concurrent or prospective social communicative skill within the ASD group, although a lack of clear preference for either stimulus was associated with higher social-communicative symptoms at baseline. Limitations: The paired visual preference we used may underestimate preference for a stimulus in younger and lower IQ individuals. Our ASD group had a lower average IQ by approximately seven points. 18% of our sample was not analysed for various technical and behavioural reasons. 
Conclusions: Biological motion preference elicits small-to-medium-sized case–control effects, but individual differences do not strongly relate to core autism-associated social symptomatology. We interpret this as an autistic difference (as opposed to a deficit), likely manifest in social brain regions. The extent to which this is an innate difference present from birth and central to the autistic phenotype, or the consequence of a life lived with ASD, is unclear.
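    The two preference indices can be made concrete for a single participant. This is an illustrative reconstruction from the definitions above (per-look durations in seconds), not the study's analysis code.

```python
def bm_preference(looks_bm, looks_ctrl):
    """Biological-motion preference indices from per-look durations.

    Returns (proportion looking time difference, peak look duration
    difference), both computed as BM - CTRL per the definitions above.
    """
    total = sum(looks_bm) + sum(looks_ctrl)
    # (1) proportion looking time difference
    prop_diff = (sum(looks_bm) - sum(looks_ctrl)) / total if total else 0.0
    # (2) peak look duration difference
    peak_diff = max(looks_bm, default=0.0) - max(looks_ctrl, default=0.0)
    return prop_diff, peak_diff

# Three seconds on BM versus one on CTRL: both indices favour BM.
prop, peak = bm_preference([2.0, 1.0], [1.0])
```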

    Nonlinear System Identification using Structured Kernel Based Modeling (Niet-lineaire systeemidentificatie via gestructureerde kernel-gebaseerde modellering)

    This thesis discusses nonlinear system identification using kernel based models. Starting from a least squares support vector machine base model, additional structure is integrated to tailor the method to more classes of systems. While the basic formulation naturally handles only nonlinear autoregressive models with exogenous inputs, this text proposes several other model structures. One major goal of this work was to exploit convex formulations, or to look for convex approximations when a convex formulation is not feasible. Two key enabling techniques used extensively within this thesis are overparametrization and nonquadratic regularization. The former can be utilized to handle nonconvexity due to bilinear products. During this work, overparametrization has been applied to handle new model structures. Furthermore, it has been integrated with other techniques to handle large data sizes, and a new approach to recover a parametrization in terms of the original variables has been derived. The latter technique, nonquadratic regularization, is also suitable for constructing convex relaxations of nonconvex problems. In this context the major contribution of this thesis is the derivation of kernel based model representations for problems with nuclear norm as well as group-l1 norm regularization.
    In terms of new or improved model structures, this thesis covers a number of contributions. The first model class considered is that of partially linear models, which combine a parametric model with a nonparametric one. These models achieve good predictive performance while being able to incorporate physical prior knowledge through the parametric model part. A novel constraint significantly reduces the variability of the parametric model part. The second part of this thesis that exploits structure to identify a more specific model class is the estimation of Wiener-Hammerstein systems.
The main contributions in this part are a thorough evaluation on the Wiener-Hammerstein benchmark data set as well as several improvements and extensions to the existing kernel based identification approach for Hammerstein systems. Besides targeting more restricted model structures, several extensions of the basic model class are also discussed. For systems with multiple outputs, a kernel based model has been derived that is able to exploit information from all outputs. Due to its reliance on the nuclear norm, the computational complexity of this model is high, which currently limits its application to small scale problems. Another extension of the model class is the consideration of time dependent systems. A method is proposed that is capable of determining the times at which a nonlinear system switches its dynamics. The main feature of this method is that it is purely based on input-output measurements. The final extension of the model class considers linear noise models in combination with a nonlinear model for the system. This work proposes a convex relaxation to estimate the noise model as well as a model capturing the system dynamics by solving a joint convex optimization problem.
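    The overparametrization trick for bilinear products mentioned above can be illustrated in isolation: the nonconvex product a bᵀ is replaced by a single matrix variable W, the (now convex) problem is solved in W, and the factors are recovered afterwards. A minimal sketch of the recovery step only, under the assumption that the solved W is close to rank one:

```python
import numpy as np

def recover_bilinear_factors(W):
    """Recover vectors a, b with W ≈ a b^T from an overparametrized solution.

    Uses the best rank-one approximation given by the leading singular
    triplet; the factorization is unique only up to scaling and sign.
    """
    U, s, Vt = np.linalg.svd(W)
    a = U[:, 0] * np.sqrt(s[0])
    b = Vt[0, :] * np.sqrt(s[0])
    return a, b

# If W is exactly rank one, the outer product is recovered exactly.
W = np.outer([1.0, 2.0], [3.0, 4.0])
a, b = recover_bilinear_factors(W)
```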
The final contribution of this thesis is a reformulation of the classical least squares support vector formulation that allows the analysis of existing models with respect to their sensitivity to perturbations on the inputs.
    Table of contents:
    1 Introduction: 1.1 Challenges; 1.2 Objectives; 1.3 Overview of chapters; 1.4 Guide through the chapters; 1.5 Contributions of this work
    Part I: Foundations
    2 System identification: 2.1 System properties; 2.2 Prior information; 2.3 Model representation (2.3.1 State-space models; 2.3.2 Polynomial or difference equation models); 2.4 Model parametrization and estimation
    3 Convex optimization: 3.1 Basic definitions and notation; 3.2 Convex problems; 3.3 Sparsity inducing norms (3.3.1 l1-norm; 3.3.2 Group l1-norm; 3.3.3 Nuclear norm); 3.4 Algorithms (3.4.1 Interior point methods; 3.4.2 First order algorithms; 3.4.3 Related techniques); 3.5 Convex relaxations (3.5.1 Norms; 3.5.2 Overparametrization)
    4 Least Squares Support Vector Machines: 4.1 Primal and dual model representations (4.1.1 Least squares loss; 4.1.2 ε-insensitive loss); 4.2 Estimation in reproducing kernel Hilbert spaces; 4.3 Handling of large data sets (4.3.1 Nyström method; 4.3.2 Approximation of the kernel matrix; 4.3.3 Fixed size approach; 4.3.4 Active selection of support vectors); 4.4 Model selection
    Part II: Original work
    5 Partially linear models with orthogonality: 5.1 Review of kernel based partially linear models; 5.2 Imposing orthogonality constraints (5.2.1 Parametric estimates under violated assumptions; 5.2.2 Imposing orthogonality; 5.2.3 Dual problem: model representation and estimation); 5.3 Improved estimation schemes and representations (5.3.1 Separation principle; 5.3.2 Equivalent kernel); 5.4 Extension to different loss functions; 5.5 Equivalent RKHS approach (5.5.1 Partially linear models in RKHSs; 5.5.2 Empirical orthogonality in RKHSs); 5.6 Experiments (5.6.1 Experimental setup; 5.6.2 Toy example; 5.6.3 Mass-Spring-Damper system; 5.6.4 Wiener-Hammerstein benchmark data); 5.7 Conclusions
    6 Modeling systems with multiple outputs: 6.1 Introduction (6.1.1 Possible applications; 6.1.2 Technical approach and theoretic setting; 6.1.3 General setting and identified difficulties; 6.1.4 Structure of chapter); 6.2 Formal problem formulation and motivation (6.2.1 Choice of model structure; 6.2.2 Conventional estimation problem; 6.2.3 Improved estimation problem); 6.3 Properties of parametric estimation problem (6.3.1 Uniqueness of the solution; 6.3.2 Choosing the range of the regularization parameter); 6.4 Dual formulation of the model (6.4.1 Dual optimization problem; 6.4.2 Properties of the dual model); 6.5 Predictive model; 6.6 Extensions (6.6.1 Variable input and output data; 6.6.2 Overparametrized models); 6.7 Numerical solution (6.7.1 Semi-definite programming representation; 6.7.2 First order methods); 6.8 Numerical validation (6.8.1 Experimental setup; 6.8.2 Results); 6.9 Conclusions
    7 Block structured models: 7.1 Introduction; 7.2 Exploiting information on the model structure (7.2.1 Model parametrization and nonlinear estimation problem; 7.2.2 Overparametrization of a simplified model; 7.2.3 Convex relaxation and dual model representation; 7.2.4 Recovery of the original model class; 7.2.5 Numerical example); 7.3 Handling of large data sets (7.3.1 A fixed-size structured model; 7.3.2 A large-scale overparametrized model; 7.3.3 Numerical example); 7.4 Improved convex relaxation based on nuclear norms (7.4.1 Parametric approach based on the fixed size formulation; 7.4.2 Kernel based approach; 7.4.3 Numerical example); 7.5 Results on the Wiener-Hammerstein benchmark data set (7.5.1 Description of data set; 7.5.2 Model order selection; 7.5.3 Performance for different number of support vectors; 7.5.4 Performance based on nuclear norm regularization); 7.6 Conclusions
    8 Linear noise models: 8.1 Incorporating linear noise models in LS-SVMs; 8.2 Estimation of parametric noise models (8.2.1 Primal model; 8.2.2 Solution in dual domain; 8.2.3 Projection onto original class); 8.3 Numerical experiments (8.3.1 Model order selection; 8.3.2 Correlation of estimated parameters with true noise model; 8.3.3 Performance of projected models; 8.3.4 Projection quality; 8.3.5 Real data); 8.4 Conclusions
    9 Sensitivity of kernel based models: 9.1 LS-SVM models in SOCP form; 9.2 Robust kernel based regression (9.2.1 Problem setting; 9.2.2 Linearization; 9.2.3 Convexification); 9.3 Least squares kernel based model (9.3.1 Problem statement & solution; 9.3.2 Predictive model); 9.4 Numerical implementation (9.4.1 Optimizations); 9.5 Numerical experiments (9.5.1 Sensitivity of inputs; 9.5.2 Sensitivity of kernels; 9.5.3 Confidence of point estimates; 9.5.4 Relation between regularization parameters; 9.5.5 Approximation performance of Ω_xy; 9.5.6 Composite approximation performance); 9.6 Conclusions
    10 Segmentation of nonlinear time series: 10.1 Problem Formulation; 10.2 Piecewise Nonlinear Modeling; 10.3 Nonparametric kernel based formulation (10.3.1 Dual formulation; 10.3.2 Recovery of sparsity pattern and predictive model); 10.4 Model selection; 10.5 Algorithm (10.5.1 Active set strategy; 10.5.2 First order algorithms); 10.6 Extension to different loss functions; 10.7 Experiments (10.7.1 NFIR Hammerstein system; 10.7.2 NARX Wiener system; 10.7.3 Algorithm); 10.8 Conclusions
    11 Conclusions
    A Appendix: A.1 Proof of Theorem 6.; A.2 Proof of Theorem 6.26: Singular value clipping
    Bibliography
    Curriculum vitae
    nrpages: 258+xvi
    status: published

    Parameter estimation for time varying dynamical systems using least squares support vector machines

    This paper develops a new approach based on Least Squares Support Vector Machines (LS-SVMs) for parameter estimation of time-invariant as well as time-varying dynamical SISO systems. Closed-form approximate models for the state and its derivative are first derived from the observed data by means of LS-SVMs. The time-derivative information is then substituted into the system of ODEs, converting the parameter estimation problem into an algebraic optimization problem. In the case of time-invariant systems, one can use least squares to solve the obtained system of algebraic equations. Time-varying coefficients in SISO models are estimated by assuming an LS-SVM model for the coefficients themselves. © 2012 IFAC.
    status: published
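    The two-step idea can be sketched on a scalar example, x' = -θx. A simple polynomial fit stands in here for the paper's LS-SVM state smoother, and the substituted ODE becomes a one-parameter least-squares problem; this is an illustration of the workflow, not the paper's estimator.

```python
import numpy as np

def estimate_decay_rate(t, x, deg=5):
    """Estimate theta in x'(t) = -theta * x(t) from samples of x.

    Step 1: fit a closed-form smooth model to the state and differentiate
    it (a polynomial stands in for the LS-SVM approximation).
    Step 2: substitute into the ODE, leaving an algebraic least-squares
    problem that is linear in theta.
    """
    coeffs = np.polyfit(t, x, deg)
    x_s = np.polyval(coeffs, t)              # smoothed state
    dx = np.polyval(np.polyder(coeffs), t)   # its time derivative
    # dx = -theta * x_s in least squares => closed-form solution:
    return -np.dot(x_s, dx) / np.dot(x_s, x_s)

t = np.linspace(0.0, 2.0, 50)
theta = estimate_decay_rate(t, np.exp(-1.5 * t))  # close to 1.5
```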

    Approximate solutions to ordinary differential equations using least squares support vector machines

    In this paper, a new approach based on least squares support vector machines (LS-SVMs) is proposed for solving linear and nonlinear ordinary differential equations (ODEs). The approximate solution is presented in closed form by means of LS-SVMs, whose parameters are adjusted to minimize an appropriate error function. For the linear and nonlinear cases, these parameters are obtained by solving a system of linear and nonlinear equations, respectively. The method is well suited to solving mildly stiff, nonstiff, and singular ODEs with initial and boundary conditions. Numerical results demonstrate the efficiency of the proposed method over existing methods.
    status: published
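    The flavour of the approach can be conveyed with a least-squares collocation sketch for y' = -y, y(0) = 1: the candidate solution is a finite basis expansion whose coefficients minimize the ODE residual plus the initial-condition error. A polynomial basis stands in for the paper's kernel expansion; this is an illustration, not the LS-SVM formulation itself.

```python
import numpy as np

def solve_linear_ode(t, deg=8):
    """Least-squares collocation for y' = -y, y(0) = 1 on the grid t."""
    # Basis phi_j(t) = t^j and its derivative j * t^(j-1)
    Phi = np.vander(t, deg + 1, increasing=True)
    dPhi = np.zeros_like(Phi)
    for j in range(1, deg + 1):
        dPhi[:, j] = j * t ** (j - 1)
    # Residual rows enforce y'(t_i) + y(t_i) = 0; the extra row enforces
    # y(t[0]) = 1 (here the grid starts at t[0] = 0).
    A = np.vstack([dPhi + Phi, Phi[:1]])
    b = np.concatenate([np.zeros(len(t)), [1.0]])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Phi @ coeffs

t = np.linspace(0.0, 1.0, 40)
y = solve_linear_ode(t)  # tracks exp(-t) closely
```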

    Least-Squares Support Vector Machines for the Identification of Wiener-Hammerstein Systems

    This paper considers the identification of Wiener-Hammerstein systems using models based on Least-Squares Support Vector Machines. The power of fully black-box NARX-type models is evaluated and compared with models incorporating information about the structure of the systems. For the NARX models it is shown how to extend the kernel-based estimator to large data sets. For the structured model the emphasis is on preserving the convexity of the estimation problem through a suitable relaxation of the original problem. To develop an empirical understanding of the implications of the different model design choices, all considered models are compared on an artificial system under a number of different experimental conditions. The obtained results are then validated on the Wiener-Hammerstein benchmark data set and the final models are presented. It is illustrated that black-box models are a suitable technique for the identification of Wiener-Hammerstein systems. The incorporation of structural information results in significant improvements in modeling performance. © 2012 Elsevier Ltd.
    status: published
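    For readers unfamiliar with the model class: a Wiener-Hammerstein system is a static nonlinearity sandwiched between two linear dynamic blocks. A toy simulator (illustrative parameter values, not the benchmark system) shows the structure; a black-box NARX model would then regress y[k] on lagged inputs and outputs without using this structural knowledge.

```python
import numpy as np

def wiener_hammerstein(u, a1=0.5, a2=0.3):
    """Toy Wiener-Hammerstein system: a first-order LTI block, a static
    cubic nonlinearity, then a second first-order LTI block."""
    x = np.zeros_like(u, dtype=float)
    for k in range(1, len(u)):
        x[k] = a1 * x[k - 1] + u[k]     # first linear block
    z = x + 0.1 * x ** 3                # static nonlinearity
    y = np.zeros_like(z)
    for k in range(1, len(z)):
        y[k] = a2 * y[k - 1] + z[k]     # second linear block
    return y

u = np.ones(10)
y = wiener_hammerstein(u)
```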